State space models (SSMs) have demonstrated state-of-the-art sequence modeling performance in some modalities, but underperform attention in language modeling. Moreover, despite scaling nearly linearly in sequence length instead of quadratically, SSMs are still slower than Transformers due to poor hardware utilization. In this paper, we make progress on understanding the expressivity gap between SSMs and attention in language modeling, and on reducing the hardware barrier between SSMs and attention. First, we use synthetic language modeling tasks to understand the gap between SSMs and attention. We find that existing SSMs struggle with two capabilities: recalling earlier tokens in the sequence and comparing tokens across the sequence. To understand the impact on language modeling, we propose a new SSM layer, H3, that is explicitly designed for these abilities. H3 matches attention on the synthetic languages and comes within 0.4 PPL of Transformers on OpenWebText. Furthermore, a hybrid 125M-parameter H3-attention model that retains two attention layers surprisingly outperforms Transformers on OpenWebText by 1.0 PPL. Next, to improve the efficiency of training SSMs on modern hardware, we propose FlashConv. FlashConv uses a fused block FFT algorithm to improve efficiency on sequences up to 8K, and introduces a novel state passing algorithm that exploits the recurrent properties of SSMs to scale to longer sequences. FlashConv yields 2$\times$ speedup on the Long Range Arena benchmark and allows hybrid language models to generate text 1.6$\times$ faster than Transformers. Using FlashConv, we scale hybrid H3-attention language models up to 1.3B parameters on the Pile and find promising initial results, achieving lower perplexity than Transformers and outperforming Transformers in zero- and few-shot learning on a majority of tasks in the SuperGLUE benchmark.
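As an illustrative aside, the building block behind layers like H3 is a state space model applied as a long convolution. The sketch below materializes the kernel of a toy diagonal SSM and applies it with an FFT; the diagonal parameterization, shapes, and names here are expository assumptions, not the paper's exact H3 layer or FlashConv kernel.

```python
import numpy as np

def ssm_conv_kernel(A_diag, B, C, seq_len):
    """Materialize K[t] = C @ diag(A)^t @ B for a diagonal state matrix
    (a toy stand-in for the structured parameterizations SSM layers use)."""
    powers = A_diag[None, :] ** np.arange(seq_len)[:, None]   # (L, d_state)
    return powers @ (B * C)                                   # (L,)

def ssm_apply(u, K):
    """y = causal convolution of input u with kernel K, via FFT in O(L log L)."""
    n = 2 * len(u)
    return np.fft.irfft(np.fft.rfft(K, n) * np.fft.rfft(u, n), n)[:len(u)]

rng = np.random.default_rng(0)
A_diag = rng.uniform(0.5, 0.99, size=4)          # stable decay rates
B, C = rng.standard_normal(4), rng.standard_normal(4)
u = rng.standard_normal(1024)                    # toy input sequence
y = ssm_apply(u, ssm_conv_kernel(A_diag, B, C, len(u)))
```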
Normalizing flows model complex probability distributions using maps obtained by composing invertible layers. Special linear layers such as masked and 1x1 convolutions play a key role in existing architectures because they increase expressive power while having tractable Jacobians and inverses. We propose a new family of invertible linear layers based on butterfly layers, which theoretically capture complex linear structures, including permutations and periodicity, yet can be inverted efficiently. This representational power is a key advantage of our approach, since such structures are common in many real-world datasets. Based on our invertible butterfly layers, we construct a new normalizing flow model called ButterflyFlow. Empirically, we demonstrate that ButterflyFlow not only achieves strong density estimation results on natural images such as MNIST, CIFAR-10, and ImageNet 32x32, but also obtains significantly better log-likelihoods on structured datasets such as galaxy images and MIMIC-III patient cohorts, all while being more efficient than relevant baselines in terms of memory and computation.
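To make the efficient-inversion claim concrete, here is a minimal sketch of a single butterfly factor under simplifying assumptions (one factor, a fixed half-split pairing): each coordinate pair is mixed by its own 2x2 matrix, so the forward map, its log-determinant, and its inverse all cost O(n). The actual ButterflyFlow layer composes many such factors with different pairings.

```python
import numpy as np

def butterfly_forward(x, a, b, c, d):
    """One butterfly factor: pair (x[i], x[i+h]) is mixed by its own 2x2
    matrix [[a_i, b_i], [c_i, d_i]]. O(n) time, tractable log-det."""
    h = len(x) // 2
    x1, x2 = x[:h], x[h:]
    y = np.concatenate([a * x1 + b * x2, c * x1 + d * x2])
    logdet = np.sum(np.log(np.abs(a * d - b * c)))
    return y, logdet

def butterfly_inverse(y, a, b, c, d):
    """Invert each 2x2 block in closed form, also in O(n)."""
    h = len(y) // 2
    det = a * d - b * c
    y1, y2 = y[:h], y[h:]
    return np.concatenate([(d * y1 - b * y2) / det, (-c * y1 + a * y2) / det])

rng = np.random.default_rng(0)
h = 8
a, b, c, d = rng.standard_normal((4, h)) + np.array([2., 0., 0., 2.])[:, None]
x = rng.standard_normal(2 * h)
y, logdet = butterfly_forward(x, a, b, c, d)
assert np.allclose(butterfly_inverse(y, a, b, c, d), x)
```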
Communication compression is a crucial technique for modern distributed learning systems, alleviating their communication bottlenecks over slower networks. Although gradient compression for data-parallel training has been studied intensively in recent years, compressing the activations of models trained with pipeline parallelism remains an open problem. In this paper, we propose AC-SGD, a novel activation compression algorithm for communication-efficient pipeline-parallel training over slow networks. Unlike previous efforts on activation compression, AC-SGD compresses the changes of the activations rather than the activation values directly. This allows us to show, for the first time to our knowledge, that an $O(1/\sqrt{T})$ convergence rate can still be achieved for non-convex objectives under activation compression, without making assumptions of gradient unbiasedness that do not hold for deep learning models with non-linear activation functions. We then show that AC-SGD can be optimized and implemented efficiently, without additional end-to-end runtime overhead. We evaluate AC-SGD on fine-tuning language models with up to 1.5 billion parameters, compressing activations to 2-4 bits. AC-SGD provides up to 4.3$\times$ end-to-end speedup over slower networks without sacrificing model quality. Moreover, we show that AC-SGD can be combined with state-of-the-art gradient compression algorithms to enable "end-to-end communication compression": all communication between machines, including model gradients, forward activations, and backward gradients, is compressed into lower precision. This provides up to 4.9$\times$ end-to-end speedup, again without sacrificing model quality.
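A minimal sketch of the central idea, compressing the change in activations between steps rather than the activations themselves, assuming a plain uniform quantizer and a single tensor (the paper's actual compressor and pipeline integration are more involved):

```python
import numpy as np

def quantize(x, bits):
    """Uniform quantizer over x's range (toy stand-in for the paper's
    low-bit compressor)."""
    lo, hi = float(x.min()), float(x.max())
    levels = 2 ** bits - 1
    scale = (hi - lo) / levels if hi > lo else 1.0
    return np.round((x - lo) / scale).astype(np.uint8), lo, scale

def dequantize(q, lo, scale):
    return q.astype(np.float32) * scale + lo

class DeltaCompressor:
    """Keep a cache on both sender and receiver and transmit only a
    quantized *delta* against it (the core AC-SGD idea, simplified)."""
    def __init__(self, shape, bits=2):
        self.cache = np.zeros(shape, dtype=np.float32)
        self.bits = bits

    def send(self, activation):
        payload = quantize(activation - self.cache, self.bits)
        self.cache += dequantize(*payload)   # mirror what the receiver sees
        return payload

    def recv(self, payload):
        self.cache += dequantize(*payload)
        return self.cache.copy()

sender, receiver = DeltaCompressor((1024,)), DeltaCompressor((1024,))
rng = np.random.default_rng(0)
acts = rng.standard_normal(1024).astype(np.float32)
for _ in range(3):                        # activations drift slowly per step
    acts += 0.01 * rng.standard_normal(1024).astype(np.float32)
    reconstructed = receiver.recv(sender.send(acts))
```

Because both endpoints update their caches with the same dequantized delta, the receiver's reconstruction tracks the sender's activations while only low-bit deltas cross the network.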
Training foundation models, such as GPT-3 and PaLM, can be extremely expensive, often involving tens of thousands of GPUs running continuously for months. These models are typically trained on specialized clusters featuring fast, homogeneous interconnects, using carefully designed software systems that support data parallelism and model/pipeline parallelism. Such dedicated clusters can be costly and difficult to obtain. Can we instead leverage the much larger supply of decentralized, heterogeneous compute with lower-bandwidth interconnects? Prior work on the heterogeneous, decentralized setting has focused on relatively small models that can be trained in a purely data-parallel manner, while state-of-the-art schemes for model-parallel foundation model training, such as Megatron, only consider the homogeneous data center setting. In this paper, we present the first study of training large foundation models with model parallelism in a decentralized regime over a heterogeneous network. Our key technical contribution is a scheduling algorithm that allocates the different computational "tasklets" in foundation model training to a group of decentralized GPU devices connected by a slow heterogeneous network. We provide a formal cost model and further propose an efficient evolutionary algorithm to find the optimal allocation strategy. We conduct extensive experiments representing different scenarios for learning over geo-distributed devices, simulated using real-world network measurements. In the most extreme case, across 8 different cities spanning 3 continents, our approach is 4.8$\times$ faster than the prior state-of-the-art training system (Megatron).
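A toy sketch of the scheduling idea, under strong assumptions: the cost model only charges for shipping activations between consecutive pipeline stages, each stage gets its own device, and the evolutionary search uses bare swap mutations. The paper's formal cost model and algorithm are considerably richer.

```python
import random

def comm_cost(assignment, bandwidth, activation_mb):
    """Time to ship activations between consecutive pipeline stages, given
    which device hosts each stage (a toy stand-in for the paper's cost
    model, which also covers data-parallel gradient traffic)."""
    return sum(activation_mb / bandwidth[assignment[i]][assignment[i + 1]]
               for i in range(len(assignment) - 1))

def evolve(n_stages, bandwidth, activation_mb, pop=64, gens=200, seed=0):
    """Bare-bones evolutionary search over stage-to-device assignments
    (assumes at least as many devices as stages)."""
    rng = random.Random(seed)
    n_dev = len(bandwidth)
    population = [rng.sample(range(n_dev), n_stages) for _ in range(pop)]
    for _ in range(gens):
        population.sort(key=lambda a: comm_cost(a, bandwidth, activation_mb))
        survivors = population[:pop // 2]
        children = []
        for parent in survivors:
            child = parent[:]
            i, j = rng.randrange(n_stages), rng.randrange(n_stages)
            child[i], child[j] = child[j], child[i]       # swap mutation
            children.append(child)
        population = survivors + children
    return min(population, key=lambda a: comm_cost(a, bandwidth, activation_mb))

bw = [[1, 10, 0.1], [10, 1, 0.1], [0.1, 0.1, 1]]   # MB/s between 3 devices
best = evolve(n_stages=3, bandwidth=bw, activation_mb=100, pop=16, gens=50)
```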
Transformers are slow and memory-hungry on long sequences, since the time and memory complexity of self-attention is quadratic in sequence length. Approximate attention methods have tried to address this problem by trading off model quality for reduced compute complexity, but often fail to achieve wall-clock speedup. We argue that a missing principle is making attention algorithms IO-aware, accounting for reads and writes between levels of GPU memory. We propose FlashAttention, an IO-aware exact attention algorithm that uses tiling to reduce the number of memory reads/writes between GPU high-bandwidth memory (HBM) and GPU on-chip SRAM. We analyze the IO complexity of FlashAttention, showing that it requires fewer HBM accesses than standard attention and is optimal for a range of SRAM sizes. We also extend FlashAttention to block-sparse attention, yielding an approximate attention algorithm faster than any existing approximate attention method. FlashAttention trains Transformers faster than existing baselines: 15% end-to-end wall-clock speedup on BERT-large (seq. length 512) compared to the MLPerf 1.1 training speed record, and 3$\times$ speedup on GPT-2 (seq. length 1K). FlashAttention and block-sparse FlashAttention enable longer context in Transformers, yielding higher-quality models (0.7 better perplexity on GPT-2 and 6.4 points of lift on long-document classification) and entirely new capabilities: the first Transformers to achieve better-than-chance performance on the Path-X challenge (seq. length 16K, 61.4% accuracy) and Path-256 (seq. length 64K, 63.1% accuracy).
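The core trick, computing exact softmax attention one key/value tile at a time with a running row max and denominator so the full score matrix is never materialized, can be sketched in NumPy (tile size and shapes are arbitrary; the real kernel also tiles queries and keeps tiles in SRAM):

```python
import numpy as np

def tiled_attention(Q, K, V, block=64):
    """Exact attention computed tile-by-tile over K/V with a running
    (row max, row sum) rescaling, so the full L x L score matrix is
    never materialized."""
    L, d = Q.shape
    O = np.zeros_like(V)
    m = np.full(L, -np.inf)            # running row max
    s = np.zeros(L)                    # running softmax denominator
    for j in range(0, L, block):
        S = Q @ K[j:j + block].T / np.sqrt(d)
        m_new = np.maximum(m, S.max(axis=1))
        corr = np.exp(m - m_new)       # rescale previous partial results
        P = np.exp(S - m_new[:, None])
        s = s * corr + P.sum(axis=1)
        O = O * corr[:, None] + P @ V[j:j + block]
        m = m_new
    return O / s[:, None]

rng = np.random.default_rng(0)
Q, K, V = rng.standard_normal((3, 256, 32))
S = Q @ K.T / np.sqrt(32)
P = np.exp(S - S.max(axis=1, keepdims=True))
assert np.allclose(tiled_attention(Q, K, V), (P / P.sum(axis=1, keepdims=True)) @ V)
```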
Overparameterized neural networks generalize well but are expensive to train. Ideally, one would like to reduce their computational cost while retaining their generalization benefits. Sparse model training is a simple and promising approach to achieve this, but challenges remain, as existing methods struggle with accuracy loss, slow training runtime, or difficulty in sparsifying all model components. The core problem is that searching for a sparsity mask over a discrete set of sparse matrices is difficult and expensive. To address this, our main insight is to optimize over a continuous superset of sparse matrices with a fixed structure known as products of butterfly matrices. Because butterfly matrices are not hardware efficient, we propose simple variants of butterfly (block and flat) to take advantage of modern hardware. Our method (Pixelated Butterfly) uses a simple fixed sparsity pattern based on flat block butterfly and low-rank matrices to sparsify most network layers (e.g., attention, MLP). We empirically validate that Pixelated Butterfly is 3$\times$ faster than butterfly and speeds up training while achieving a favorable accuracy-efficiency tradeoff. On the ImageNet classification and WikiText-103 language modeling tasks, our sparse models train up to 2.5$\times$ faster than dense MLP-Mixer, Vision Transformer, and GPT-2 medium with no drop in accuracy.
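A toy rendering of the fixed sparsity pattern, under a simplifying assumption: one block-diagonal "local" band plus one long-range block stride, combined with a low-rank term (the paper's flat block butterfly pattern fuses more factors):

```python
import numpy as np

def flat_butterfly_mask(n, block=4):
    """Fixed block-sparse pattern: a block-diagonal band plus one
    long-range block stride (a two-band simplification)."""
    nb = n // block
    mask = np.zeros((n, n), dtype=bool)
    for i in range(nb):
        for j in range(nb):
            if i == j or (j - i) % nb == nb // 2:
                mask[i*block:(i+1)*block, j*block:(j+1)*block] = True
    return mask

def pixelfly_matvec(x, W, U, V, mask):
    """y = (mask * W) x + U (V x): fixed sparse part plus low-rank part."""
    return (W * mask) @ x + U @ (V @ x)

rng = np.random.default_rng(0)
n, r = 64, 4
mask = flat_butterfly_mask(n)
W, x = rng.standard_normal((n, n)), rng.standard_normal(n)
U, V = rng.standard_normal((n, r)), rng.standard_normal((r, n))
y = pixelfly_matvec(x, W, U, V, mask)
```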
An important goal of AutoML is to automate the design of neural networks on new tasks in under-explored domains. Motivated by this goal, we study the problem of enabling users to discover the right neural operations from data in their specific domain. We introduce a search space of operations called XD-operations that mimic the inductive bias of standard multichannel convolutions while being substantially more expressive: we prove that it includes many named operations across multiple application areas. Starting from any standard backbone such as ResNet, we show how to transform it into a search space over XD-operations and how to traverse that space with a simple weight-sharing scheme. On a diverse set of tasks (solving PDEs, distance prediction for protein folding, and music modeling), our approach consistently yields models with lower error than baseline networks, and often with lower error than expert-designed, domain-specific methods.
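The starting point can be made concrete: circular convolution factors as $B^{-1}\,\mathrm{diag}(Bw)\,C$ with $B = C$ the DFT, and XD-operations relax these fixed transforms to learnable butterfly (kaleidoscope) matrices. The sketch below shows only the fixed special case:

```python
import numpy as np

# Circular convolution written as y = B^{-1} diag(B w) C x with B = C = DFT.
# XD-operations keep this three-matrix skeleton but make the transforms
# learnable, so the search space contains convolution as one point.
def conv_as_diagonalization(x, w):
    B, Binv = np.fft.fft, np.fft.ifft     # fixed here; learnable in XD
    return Binv(B(w) * B(x)).real

rng = np.random.default_rng(0)
x, w = rng.standard_normal((2, 16))
ref = np.array([sum(w[k] * x[(i - k) % 16] for k in range(16)) for i in range(16)])
assert np.allclose(conv_as_diagonalization(x, w), ref)
```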
Early detection of the relevant locations in a piece of news is especially important in extreme events such as environmental disasters, war conflicts, disease outbreaks, or political turmoil. This detection also helps recommender systems promote relevant news based on user locations. When the relevant locations are not mentioned explicitly in the text, state-of-the-art methods typically fail to recognize them because they rely on syntactic recognition. In contrast, by incorporating a knowledge base and connecting entities with their locations, our system successfully infers the relevant locations even when they are not mentioned explicitly in the text. To evaluate the effectiveness of our approach, and given the lack of datasets in this area, we also contribute to the research community a gold-standard multilingual news-location dataset, NewsLOC. It contains annotations of the relevant locations (and their WikiData IDs) of 600+ Wikinews articles in five different languages: English, French, German, Italian, and Spanish. Through experimental evaluations, we show that our proposed system outperforms the baselines, and that fine-tuning the model with semi-supervised data further increases the classification rate. The source code and the NewsLOC dataset are publicly available to the research community at https://github.com/vsuarezpaniagua/NewsLocation.
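A deliberately toy sketch of the knowledge-base inference step, assuming entities have already been extracted and an entity-to-location mapping derived from WikiData is available; all names and data below are hypothetical:

```python
# Hypothetical toy knowledge base mapping entities to locations.
KB_LOCATION = {
    "Eiffel Tower": "Paris",
    "Bundestag": "Berlin",
}

def infer_locations(entities):
    """Return locations implied by entities, even when no place name
    appears verbatim in the article text."""
    return {KB_LOCATION[e] for e in entities if e in KB_LOCATION}

print(infer_locations(["Eiffel Tower", "protests"]))   # {'Paris'}
```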
This paper aims to improve the Warping Planar Object Detection Network (WPOD-Net) using feature engineering to increase accuracy. Specifically, we hypothesize that adding knowledge about edges in the image enhances the information available for determining the license plate contour in the original WPOD-Net model. The Sobel filter, selected experimentally, acts as a convolutional neural network layer whose edge information is combined with the features of the original network to create the final embedding vector. The proposed model was compared with the original model on a dataset that we collected for evaluation. The results, evaluated using the Quadrilateral Intersection over Union metric, demonstrate that the model achieves a significant improvement in performance.
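A minimal sketch of the fixed-filter idea, assuming a naive "valid" convolution and gradient-magnitude fusion; how the response is concatenated into WPOD-Net's embedding vector is not shown:

```python
import numpy as np

SOBEL_X = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=np.float32)
SOBEL_Y = SOBEL_X.T

def conv2d_valid(img, kernel):
    """Naive 'valid' 2D cross-correlation, enough to show the fixed-filter idea."""
    H, W = img.shape
    out = np.zeros((H - 2, W - 2), dtype=np.float32)
    for i in range(H - 2):
        for j in range(W - 2):
            out[i, j] = np.sum(img[i:i + 3, j:j + 3] * kernel)
    return out

def edge_features(img):
    """Gradient magnitude from fixed Sobel kernels, acting like an extra
    (non-learned) convolutional layer whose output can be concatenated
    with learned features."""
    gx, gy = conv2d_valid(img, SOBEL_X), conv2d_valid(img, SOBEL_Y)
    return np.sqrt(gx ** 2 + gy ** 2)

img = np.random.default_rng(0).random((64, 64)).astype(np.float32)
edges = edge_features(img)     # (62, 62) edge-strength map
```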
Collecting large-scale medical datasets with fully annotated samples for training deep networks is prohibitively expensive, especially for 3D volume data. Recent breakthroughs in self-supervised learning (SSL) offer the ability to overcome the lack of labeled training samples by learning feature representations from unlabeled data. However, most current SSL techniques in the medical field are designed for either 2D images or 3D volumes. In practice, this restricts the capability to fully leverage unlabeled data from numerous sources, which may include both 2D and 3D data, and constrains the use of the pre-trained networks to downstream tasks with compatible data dimensions. In this paper, we propose a novel framework for unsupervised joint learning on 2D and 3D data modalities. Given a set of 2D images or 2D slices extracted from 3D volumes, we construct an SSL task based on a 2D contrastive clustering problem for distinct classes. The 3D volumes are exploited by computing a vector embedding for each slice and then assembling a holistic feature through deformable self-attention mechanisms in a Transformer, allowing the incorporation of long-range dependencies between slices inside 3D volumes. These holistic features are further utilized to define a novel 3D clustering-agreement-based SSL task and a masked embedding prediction task inspired by pre-trained language models. Experiments on downstream tasks, such as 3D brain segmentation, lung nodule detection, 3D heart structure segmentation, and abnormal chest X-ray detection, demonstrate the effectiveness of our joint 2D and 3D SSL approach. We improve plain 2D DeepCluster-v2 and SwAV by a significant margin and also surpass various modern 2D and 3D SSL approaches.
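A toy stand-in for the slice-assembly step: per-slice embeddings aggregated by one ordinary self-attention layer and mean-pooled into a holistic volume feature. The paper uses deformable attention inside a Transformer; all shapes and initializations below are assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def aggregate_slices(slice_embs, Wq, Wk, Wv):
    """Assemble a holistic volume feature from per-slice embeddings with a
    single self-attention layer followed by mean pooling."""
    Q, K, V = slice_embs @ Wq, slice_embs @ Wk, slice_embs @ Wv
    A = softmax(Q @ K.T / np.sqrt(K.shape[-1]))   # slice-to-slice attention
    return (A @ V).mean(axis=0)                   # holistic volume feature

rng = np.random.default_rng(0)
slices = rng.standard_normal((32, 128))           # 32 slices, 128-dim each
Wq, Wk, Wv = rng.standard_normal((3, 128, 128)) / np.sqrt(128)
vol_feature = aggregate_slices(slices, Wq, Wk, Wv)
```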